AAAI AI-Alert Ethics for Aug 3, 2021
Five early reflections on the EU's proposed legal framework for AI
As the use of AI accelerates around the world, policymakers are asking what frameworks should guide the design and use of AI, and how it can benefit society. The EU is the first institution to take a major step toward answering these questions, with a proposed legal framework for AI released on 21 April 2021. In doing so, the EU seeks both to establish a safe environment for AI innovation and to position itself as a leader in setting "the global gold standard" for regulating AI. One positive aspect of the proposal is its focus on specific applications rather than on AI technology itself: AI is a broad set of technologies, tools, and applications whose impact can differ significantly depending on how each is used, so regulating by application helps mitigate the risk of divergent requirements for AI products and services.
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- Law > Statutes (0.74)
Study warns of compliance costs for regulating Artificial Intelligence
The EU's forthcoming regulation on Artificial Intelligence could cost the bloc's economy up to €31 billion over the next five years and cause AI investment to shrink by as much as 20%, according to a study published on Monday (26 July). The assessment by the Centre for Data Innovation examined the administrative costs of the Artificial Intelligence Act (AIA), a horizontal EU regulation that would introduce escalating obligations based on the level of risk associated with each application of the technology. The study's author stresses the administrative burden the new legislation is expected to create, which he says will disincentivise innovation and technology uptake. "The Commission has repeatedly asserted that the draft AI legislation will support growth and innovation in Europe's digital economy, but a realistic economic analysis suggests that argument is disingenuous at best," said senior policy analyst and report author Ben Mueller. Achieving that goal would require roughly ten times the current level of investment in the technology, yet the author says compliance costs would eat up just under 20% of that investment.
- Government (1.00)
- Law > Statutes (0.60)
What AI Experts Fear from AI
Algorithmic bias, rogue autonomous agents, and opaque decision-making are among the outcomes that AI developers fear will come from their work, according to a new report issued today by the Deloitte AI Institute and the U.S. Chamber of Commerce. Titled "Investing in trustworthy AI," the 82-page report from Deloitte and the Chamber Technology Engagement Center sought to identify the concerns technology experts have about the adoption of AI, as well as to highlight the impact that government investment can have on the emerging technology. Algorithmic bias and a lack of humans in decision loops are concerns for about two-thirds of the 250 people who participated in the survey. Another 60% identified "rogue or unanticipated behavior" of autonomous agents as a threat, while 56% cited the lack of explainability of algorithms as a concern. "Perceived, and actual, discrimination by AI systems undermines the confidence individuals have in whether they are being given a fair opportunity when AI is involved," the report stated.
- Questionnaire & Opinion Survey (0.54)
- Research Report (0.37)
- Government (1.00)
- Law > Intellectual Property & Technology Law (0.32)